Patent Abstract:
Techniques for cyber warning are provided. One technique includes a cyber warning receiver (CWR). The CWR includes a bus sensing circuit to sense traffic on a communications bus over time, an anomaly detection circuit to detect anomalous behavior in the perceived bus traffic, a data fusion circuit to fuse detected anomalous behavior into groups that have similar characteristics, a decision-making circuit to decide whether the fused anomalous behavior is normal or abnormal, and a behavior logging circuit to record the detected anomalous behavior in an electronic storage device. In one embodiment, the CWR additionally includes a behavior warning circuit to alert an operator to fused anomalous behavior identified as abnormal. In one embodiment, the communications bus is an embedded communications bus, such as a MIL-STD-1553 bus, and the CWR is an independent device configured to connect to the MIL-STD-1553 bus as a bus monitor.
Publication number: BR112019026645B1
Application number: R112019026645-3
Filing date: 2018-06-05
Publication date: 2021-06-08
Inventors: Patrick M. Hayden; Jeong-O Jeong; Vu T. Le; Christopher C. Rappa; Sumit Ray; Katherine D. Sobolewsk; David K. Woolrich Jr.
Applicant: Bae Systems Information And Electronic Systems Integration Inc.
IPC primary classification:
Patent description:

Field of the Description
[001] This description relates to a cyber warning receiver.
Background of the Invention
[002] Recent world events demonstrate that no industry is immune to the disruptive effects of cyber attacks. System-of-systems architectures, commonly used in both information systems and defense weapons systems, provide a greater opportunity for software vulnerabilities to spread the negative effects of cyber attacks throughout the system. Abnormal behavior of a system or subsystem is often attributed to faulty equipment or software. Post-mission failure analysis traditionally focuses on system functionality rather than on determining whether a cyber adversary is responsible for the abnormal behavior. Despite the demonstrated and growing threat of cyber attack against legacy commercial and military platforms, these systems currently do not support passive monitoring, active defense, or forensic data collection capabilities focused on enhancing cybersecurity. These systems are not well suited to existing cyber intrusion detection or prevention technologies due to their prevalent use of communications buses and networks that are not standard in traditional Information Technology (IT) environments. Furthermore, current IT industry approaches involving signature-based detection are not adequate for threat mitigation in the highly critical applications served by these platforms. Existing techniques can only identify a threat after it has first been observed and categorized on another (e.g., compromised) system. Defending against zero-day attacks, which can leverage vulnerabilities, exploits, techniques, and code entirely unknown to defenders, is crucial to commercial and government security.
Brief Description of the Drawings
[003] Features of embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the drawings, in which like numbers represent like parts.
[004] Figure 1 is a schematic diagram illustrating an example avionics communication system for implementing one or more embodiments of the present description.
[005] Figure 2 is a schematic diagram illustrating an example cyber warning receiver (CWR) system for detecting and recording anomalous bus traffic, according to an embodiment of the present description.
[006] Figure 3 is a schematic diagram illustrating an example cyber warning receiver (CWR) system for detecting and recording anomalous bus traffic, according to another embodiment of the present description.
[007] Figure 4 is a block diagram of an example CWR, according to an embodiment of the present description.
[008] Figure 5 is a flowchart illustrating an example computer-based cyber warning method, according to an embodiment of the present description.
[009] Figure 6 is a schematic diagram illustrating an example neural network-based anomaly sensor for analyzing bus traffic and an alert generator based on a partially observable Markov decision process (POMDP) for deciding whether the analyzed bus traffic is normal or abnormal, according to an embodiment of the present description.
[0010] Figure 7 is a schematic diagram illustrating an example neural network for analyzing bus traffic, according to an embodiment of the present description.
[0011] Figure 8 is a schematic diagram illustrating a general POMDP and an example POMDP, according to an embodiment of the present description.
[0012] Figure 9 is a schematic diagram illustrating an example POMDP for deciding whether anomalous bus traffic data is normal or abnormal, according to an embodiment of the present description.
[0013] Figure 10 is a diagram illustrating an example Bayesian recursive estimator for use with a POMDP, according to an embodiment of the present description.
[0014] Although the following detailed description proceeds with reference being made to the illustrative embodiments, various alternatives, modifications, and variations thereof will be apparent to those skilled in the art in view of the present description.
Detailed Description
[0015] In one or more embodiments of the present description, defense against zero-day attacks on an embedded bus system is provided by a cyber warning receiver (CWR) that uses an anomaly-based approach in which attacks reveal themselves through their side effects on the embedded bus. These side effects can include induced deviations from normal application and network behavior on the embedded bus. The anomaly-based approach can use a two-stage decision and classification process. Using machine learning techniques, normal bus traffic can be monitored to train one or more anomaly detectors to identify anomalous bus traffic. The anomalous behavior from these detectors can then be used to train a data fuser that merges anomalous data sharing similar characteristics (such as being collected at the same time) to identify which anomalous behavior is normal and which is abnormal (e.g., a possible cyber attack). The fuser can additionally use the anomaly observations from the various system components to construct a complete picture of the overall system anomaly and characterize an attack. Abnormal data can trigger alerts to operators as well as pinpoint the bus traffic of interest for post-mission/test analysis. Such CWR techniques protect legacy platforms and systems rather than focusing on platforms yet to be developed, which is critical to ensuring that organizations (such as military organizations) can operate in a cyber-denied environment using their legacy platforms and systems. Such techniques also provide practical and efficient solutions to cyber threats compared to other techniques, such as hardening each of the (potentially thousands of) components that constitute modern systems and systems of systems.
[0016] Other network security techniques are not adapted to operate on the communications networks commonly employed by commercial and defense platforms. Additionally, other approaches to cybersecurity do not work because they are signature-based and inadequate for protecting non-traditional systems. For example, other techniques that use rule-based approaches to detection (such as antivirus software) would not identify an unknown attack as being malicious, as there will be no rule for doing so (e.g., no known signature to be searched for).
[0017] Most current cyber protection mechanisms in the industry are centered on information assurance and signature detection and will not provide adequate protection, for example, for aviation platforms that have embedded buses. Methods such as antivirus software only protect systems against previously known threats, not zero-day (for example, first-time) attacks. Furthermore, antivirus software is not adequate to protect the various proprietary and non-standard systems that make up most weapon systems, as these systems do not share the components (and, by extension, the vulnerabilities) present in the open, standards-based systems that traditional antivirus software targets. Defending against zero-day attacks that can leverage vulnerabilities, exploits, techniques, and code entirely unknown to defenders can be challenging. Simple anomaly detection, such as checking the communications of each component in isolation, usually fails because it produces too many false positives to provide a reliable indicator.
[0018] Thus, in various embodiments of the present description, techniques for cyber warning are provided. One technique includes a cyber warning receiver (CWR). The CWR includes a bus sensing circuit to sense traffic on a communications bus over time, an anomaly detection circuit to detect anomalous behavior in the perceived bus traffic, a data fusion circuit to fuse detected anomalous behavior into groups that have similar characteristics, a decision-making circuit to decide whether the fused anomalous behavior is normal or abnormal, and a behavior logging circuit to record the detected anomalous behavior in an electronic storage device. In one embodiment, the CWR additionally includes a behavior warning circuit to alert an operator to fused anomalous behavior identified as abnormal. In one embodiment, the communications bus is an embedded communications bus, such as a MIL-STD-1553 bus, and the CWR is an independent device configured to connect to the MIL-STD-1553 bus as a bus monitor.
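As a non-limiting illustration of the data flow just described, the following Python sketch wires the sensing, detection, fusion, decision, logging, and warning stages together. All names, data structures, and callables are hypothetical placeholders, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BusMessage:
    timestamp: float      # seconds since capture start
    source: int           # remote terminal address observed on the bus
    words: List[int]      # raw 16-bit data words

@dataclass
class Anomaly:
    message: BusMessage
    detector: str         # which anomaly detector flagged it
    score: float          # how far the message falls from learned normal behavior

def cyber_warning_pipeline(sensed_traffic, detectors, fuse, decide, log, warn):
    """Hypothetical end-to-end flow of a cyber warning receiver (CWR)."""
    # 1. Detection: each detector flags messages inconsistent with trained normal behavior.
    anomalies = [a for det in detectors for a in det(sensed_traffic)]
    # 2. Fusion: group anomalies that share characteristics (e.g., time of occurrence).
    groups = fuse(anomalies)
    # 3. Decision: classify each fused group as normal (harmless) or abnormal.
    for group in groups:
        verdict = decide(group)          # "normal" or "abnormal"
        log(group, verdict)              # 4. Record to electronic storage for post-mission analysis.
        if verdict == "abnormal":
            warn(group)                  # 5. Alert the operator.
```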
[0019] By way of example, in some embodiments, the CWR uses an anomaly-based approach in which attacks reveal themselves through their side effects (i.e., induced deviations from normal application and network behavior). The CWR uses one or more anomaly detectors that have deep insight into the processes they are monitoring and accurately represent the complex latent variables that influence the observed measurements. The anomaly detector models the observable interactions between different components and captures a model for the stimulus and responses of the system as a whole. Anomalous bus traffic is identified and then routed through a data fuser to group similar anomalous bus traffic records together. This grouping can then be used to decide whether any particular group is normal (e.g., harmless) or abnormal (e.g., something to worry about, such as a cyber attack). The CWR is a line-replaceable unit that provides "hard" security to legacy systems not designed with cybersecurity requirements. In some embodiments, the CWR unit can be transparently added to a bus (such as an embedded bus) with minimal impact to the legacy system.
[0020] In some embodiments, CWR units couple anomaly detection and data fusion capabilities with the multitude of communications technologies commonly employed by platforms to provide the missing cyber situational awareness. The CWR can monitor platform traffic data from a bus monitor location, ensuring that bus traffic is monitored without interference with other systems operating on the bus. According to one or more embodiments, during controlled training periods, the CWR characterizes traffic patterns and models the full range of normal system behaviors. After training, the CWR detects abnormal behavior between systems residing on the platform bus and takes appropriate action, such as alerting an operator of the anomalous behavior and concurrently recording the behavior for post-mission/test analysis. In some embodiments, fusion capabilities aggregate observations across the full range of the system to infer and report the overall system security state. The CWR thus provides situational awareness, active defense, and forensic data for unprecedented attacks, protecting the platform from malicious messages and data that would otherwise be transmitted on the bus.
Overview
[0021] In one or more embodiments, a CWR is adapted to the MIL-STD-1553 ("1553") communications bus architecture (for a serial data bus) commonly used in avionics platforms in the defense industry. However, the present description is not limited thereto. In other embodiments, the CWR is adapted to other architectures, such as MIL-STD-1760 (the electrical interface between a military aircraft and its weapons, electronic devices, disposable tanks, and other stores), MIL-STD-1773 (a fiber optic bus), Aeronautical Radio, Incorporated (ARINC) standards (such as ARINC 429, directed to a serial avionics bus and the corresponding protocol), RS-232 (serial data transmission), RS-485 (serial communications drivers and receivers), and other architectures relevant to both military and commercial platforms. For example, in some embodiments, applications extend beyond avionics networks to include mission networks, control system networks, and the like.
[0022] In one or more embodiments, a CWR employs anomaly detection and fusion algorithms on one or more communication networks, including non-traditional communication networks (such as military networks, network architectures no longer used by most computing resources, and the like). In some embodiments, the CWR is implemented as a line-replaceable unit on purpose-built hardware, but other embodiments are not so limited. For example, in one or more embodiments, CWR algorithms and technologies can also be incorporated into existing platform hardware (e.g., mission computers or existing bus monitors, to name a few), depending on the application. In some embodiments, a single end-to-end implementation of a CWR is provided. In other embodiments, parts of a communication system that incorporate the described CWR technology are provided.
[0023] Thus, and in accordance with an embodiment, a technique for cyber warning and a CWR that uses this technique are provided. For ease of description, several embodiments are described herein in terms of an avionics platform, such as a military aviation platform, using a standard military bus architecture, such as MIL-STD-1553, for an embedded bus architecture that supports the avionics platform. However, the present description is not limited thereto and, in other embodiments, cyber warning techniques and CWRs are provided for use in other military or civilian equipment and communication architectures. The CWR has the ability to detect cyber attacks or to detect cyber attacks in development. Such a CWR can be located, for example, on communication buses connected to weapon systems and can provide real-time and post-mission situational awareness, increasing the cyber resilience of these platforms.
[0024] In some embodiments, the CWR includes anomaly detection sensors. Such sensors can be machine learning sensors (such as neural network sensors) trained on real (e.g., uncompromised) bus traffic to look for the characteristics of normal bus traffic. Each sensor can look for different types of data that are inconsistent with known normal bus traffic. Bus traffic identified as anomalous by the anomaly detection sensors can be fed into a data fusion mechanism to merge anomalous data events coming from different sensors (e.g., sharing the same time). Merged data can be identified by the data fusion mechanism as normal (e.g., falling into previously observed anomalous data patterns and therefore probably harmless) or abnormal (e.g., not previously observed and therefore likely of interest, such as a cyber attack). Current weapon systems can benefit from capabilities both to detect cyber attacks in real time and to enable cyber attack analysis in a post-mission setting. Knowing when and how a system is experiencing cyber attacks informs the next steps required for persistent cyber defense of military weapons systems. In one or more embodiments, a CWR located on aviation platforms (manned or unmanned) provides real-time cyber attack notification (such as alerting an operator of abnormal bus traffic) and the ability to conduct post-mission cyber analysis.
[0025] Such a CWR performs important functions such as alerting operators of cyber attacks or developing cyber attacks against avionics weapon systems and enabling cyber-relevant post-mission analysis of systems by recording anomalous behavior between the systems. A CWR such as this can be a targeted implementation, such as on the highest-risk or most critical systems (in consideration of supported platforms), and deployed in an efficient manner to improve or optimize available coverage. Such a CWR uses an anomaly detector to detect anomalous behavior on a communications bus. The anomaly detector is trained, for example, with artificial intelligence techniques, to detect anomalous bus traffic on the communications bus. For example, the anomaly detector can be trained, using normal bus traffic, to identify the patterns and characteristics present in normal bus traffic. The trained anomaly detector can then have its output fused (e.g., by a data fuser) into groups of anomalous events that share similar characteristics, whereupon the data fuser can identify each group as normal (e.g., not a problem) or abnormal (e.g., probably a problem). Such a CWR also has cyber resilience, including the ability to prevent cyber attacks targeted at the CWR itself.
Architecture and Methodology
[0026] Figure 1 is a schematic diagram illustrating an example avionics communication system 100 for implementing one or more embodiments of the present description. Figure 2 is a schematic diagram illustrating an example cyber warning receiver (CWR) system 200 for detecting and recording anomalous bus traffic behavior, in accordance with an embodiment of the present description. Figure 3 is a schematic diagram illustrating an example cyber warning receiver (CWR) system for detecting and recording anomalous bus traffic, according to another embodiment of the present description. Figure 4 is a block diagram of an example CWR 400, in accordance with an embodiment of the present description. Figure 5 is a flowchart illustrating an example computer-based cyber warning method 500, in accordance with an embodiment of the present description. Method 500 and other methods described herein can be implemented in hardware or software, or some combination of the two. For example, method 500 can be implemented by the CWR 400 of Figure 4. In another embodiment, method 500 can be implemented as a custom circuit, such as a field-programmable gate array (FPGA) configured to perform method 500.
[0027] In some other embodiments, method 500 may be implemented as a series of computer instructions, such as software, embedded software, or a combination of the two, together with one or more computer processors (e.g., one or more microprocessors). The instructions, when executed on a given processor, cause method 500 to be performed. For example, in one or more embodiments, a computer program product is provided. The computer program product includes one or more non-transient machine-readable media (such as a compact disc, a DVD, a solid state drive, a hard disk, a RAM, a ROM, an on-chip processor cache, or the like) encoded with instructions that, when executed by one or more processors, cause method 500 (or another method described herein) to be performed for cyber warning. Furthermore, although the methods described herein may appear to have a certain order in their operations, other embodiments may not be so limited. In this way, the order of operations can be varied between embodiments, as will be apparent in light of this description.
[0028] In a similar light, the CWR 400 and other circuits described herein may be custom hardware circuits or general purpose computer hardware configured (for example, through software, embedded software, or custom logic, to name a few) to perform the tasks assigned to the circuit. Although the circuits are illustrated as being made up of distinct circuits per function, in other embodiments, two or more circuits can be combined into a single circuit that realizes the functionality of the two or more circuits. In still other embodiments, a single circuit can be divided into two or more circuits, each performing the separate functions performed by the single circuit.
[0029] Referring to Figure 1, the communication system 100 includes a local area network (LAN) 110, such as Ethernet, and a communications bus 120, such as an embedded bus as in a MIL-STD-1553 network. LAN 110 may be an avionics systems LAN for communication among the components of communication system 100, such as with multi-purpose displays 132, 134, and 136. Multi-purpose displays 132, 134, and 136 may serve as the interfaces between avionics operators (such as the flight crew in an aircraft) and the computers, weapons systems (or other platforms), and the like that are controlled by the avionics. Communication system 100 additionally includes system processors 142 and 144, which can process data from the various instrumentation, weapons, and other platforms that make up the avionics and convert the processed data into images (such as 2D or 3D images) for display on multi-purpose displays 132, 134, and 136. It should be noted that the number and types of components that make up communication system 100 may vary between embodiments, and the number and type illustrated in Figure 1 is only an example. Other embodiments are not so limited, as will be apparent in light of the present description.
[0030] The communication system 100 additionally includes the display control units 152 and 154, which can manage the various displays, communications, weapons, instrumentation, and other platforms of the avionics vehicle (e.g., a helicopter). For example, display control units 152 and 154 can serve as data bus controllers for a MIL-STD-1553 serial bus architecture (with which communications bus 120 can conform), with display control unit 152 being the primary data bus controller and display control unit 154 being a backup data bus controller. Communications bus 120 may be a 1553 serial bus, such as a MIL-STD-1553B bus. In some embodiments, communication bus 120 comprises two or more buses, for example, to provide redundancy in the event of damage or failure in one or more of the buses. Communication system 100 includes additional platform systems 162, 164, 166, and 168 for controlling weapons and other vehicle platforms with avionics. These platform systems 162, 164, 166, and 168 communicate over communications bus 120 (such as the 1553 serial data bus). The communication system 100 also includes a CWR 170 for sensing traffic on the communication bus 120, detecting anomalous behavior in the perceived traffic, merging detected anomalous behavior into groups of anomalous events that share similar characteristics, logging detected anomalous behavior on a data storage device (such as a disk drive or solid state drive), and alerting an operator if the merged anomalous data appears abnormal (such as a possible cyber attack). The data storage device may be part of the CWR 170, or may be accessible to the CWR 170, for example, on one of system processors 142 or 144 via communications bus 120, or via local area network 110.
[0031] Referring to Figure 2, the CWR system 200 includes a communications bus 210 (such as a serial data bus configured to be driven in accordance with the MIL-STD-1553 architecture), a bus controller 220 to control communications transmitted over communications bus 210 according to an agreed-upon protocol (such as 1553), remote terminals 232, 234, 236, and 238 to perform various functions related to the avionics platform (such as weapons, other stores, or instrumentation control), and a CWR 240 for sensing communication or other traffic on the communications bus 210, detecting anomalous behavior in the traffic, merging the detected anomalous behavior into groups of similar events, logging detected anomalous behavior in an electronic data storage device, and alerting an operator if any of the merged groups contain abnormal data (e.g., data to be concerned with).
[0032] One of the remote terminals (such as remote terminal 234) may be compromised and may be performing unusual (e.g., anomalous) communications on the communication bus 210. In this case, the CWR 240 detects the anomalous behavior, classifies it as abnormal, and records the detected behavior (such as a copy of the communication and the source of the communication, in this case the remote terminal 234) to a data storage device, such as a flash drive that is part of the CWR 240. For cyber resilience, the CWR 240 can be configured to only monitor or sense message traffic on communications bus 210, and not to receive messages or other forms of potential electronic control via communications bus 210. As such, even though the remote terminal 234 may be compromised, the compromised remote terminal 234 cannot take control of the CWR 240 or prevent the CWR 240 from fulfilling its mission of detecting and recording anomalous behavior on the communication bus 210.
[0033] Referring to Figure 3, the CWR system 300 is similar to the CWR system 200 of Figure 2; however, the CWR system 300 includes two communications buses, namely, a first communications bus 310 and a second communications bus 360 (such as serial data buses configured to be driven according to the MIL-STD-1553 architecture) on the same avionics platform. The first communications bus 310 is connected to a bus controller 320, remote terminals 332, 334, 336, and 338, and the CWR 340, while the second communications bus 360 is connected to a bus controller 370, remote terminals 382, 384, 386, and 388, and the CWR 390. For example, there may be more subsystems (and therefore remote terminals) than can be supported by one communications bus on the avionics platform, hence multiple communication buses are needed to communicate with and control the various systems. In the CWR system 300 of Figure 3, one component (e.g., a computer, a display control unit, or the like) is shared between the two communication buses 310 and 360, and serves as the bus controller 370 for the second communications bus 360 and as remote terminal 332 for the first communications bus 310. As such, bus controller 370 can act as a bridge between the two communications buses 310 and 360. Each communications bus has its own CWR (e.g., the CWR 340 or the CWR 390) for monitoring the bus traffic on its corresponding communications bus. In other embodiments, there may be three or more such communications buses and corresponding controllers, remote terminals, and CWRs, all part of a larger CWR system.
[0034] One of the remote terminals (such as remote terminal 234) may be compromised and may be making unusual (e.g., anomalous) communications on the communication bus 210. In this case, the CWR 240 detects the anomalous behavior, classifies it as abnormal, and records the detected behavior (such as a copy of the communication and the source of the communication, in this case the remote terminal 234) to a data storage device, such as a flash drive that is part of the CWR 240. For cyber resilience, the CWR 240 can be configured to only monitor or sense message traffic on communications bus 210, and not to receive messages or other forms of potential electronic control via communications bus 210. As such, even though the remote terminal 234 may be compromised, the compromised remote terminal 234 cannot take control of the CWR 240 or prevent the CWR 240 from fulfilling its mission of detecting and recording anomalous behavior on the communications bus 210.
[0035] Referring to Figure 4, the CWR 400 receives as input bus traffic (such as message traffic) from a communications bus (such as a 1553 bus). For example, the CWR 400 can act as a communications bus monitor for monitoring traffic on the 1553 bus. More specifically, the CWR 400 can be a custom (such as purpose-built) circuit, such as a field-programmable gate array (FPGA). In other embodiments, the CWR 400 may be a processor or other computational circuit configured to execute code (such as software or embedded software) to perform the functions of a CWR technique in accordance with one or more embodiments of the present description. The monitored bus traffic is sensed by a bus sensing circuit 410, which groups the traffic into different messages and sends the messages to an anomaly detection circuit 420. The anomaly detection circuit 420 detects anomalous behavior in the perceived bus traffic (for example, by comparing the perceived traffic to normal traffic to see whether the perceived traffic is close enough to normal traffic that it is unlikely to be a cyber attack). Anomaly detection circuit 420 may include a plurality of anomaly detectors, each sensing different anomalous behavior in messages.
[0036] For example, in some embodiments, the anomaly detection circuit 420 is a neural network that has been configured (e.g., trained) to recognize the normal bus traffic emanating from the components of the avionics platform. For example, the neural network can be trained to identify features or characteristics of normal communications on the communication bus for the particular avionics platform for which the neural network is being trained. In this way, when presented with new bus traffic that does not resemble (for example, does not share the features or characteristics of) known normal communications, anomaly detection circuit 420 detects the new bus traffic as exhibiting anomalous behavior. In other embodiments, different machine learning frameworks (e.g., support vector machines) are used to perform anomaly detection. In some embodiments, the anomaly detection circuit 420 includes numerous (e.g., four) anomaly detectors, each operating independently and sensing different types of anomalies (e.g., anomalies from different network layers or from different applications, and the like). Additional details of the anomaly detection circuit 420 and the detection of anomalous behavior are discussed below.
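As a hedged illustration of the alternative machine learning frameworks mentioned above, the following sketch trains a one-class support vector machine only on features of normal (uncompromised) bus traffic and then flags traffic that does not resemble it. The feature columns and the stand-in data are hypothetical assumptions, not part of the description.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical feature vectors, one row per bus message: e.g., [inter-arrival time,
# word count, source terminal address, destination subaddress]. Stand-in data is
# generated here; a real CWR would extract these features from captured 1553 traffic.
rng = np.random.default_rng(0)
normal_features = rng.normal(loc=[0.02, 16, 5, 2], scale=[0.002, 1, 0.1, 0.1], size=(5000, 4))

detector = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale")
detector.fit(normal_features)              # train only on known-good (uncompromised) traffic

new_features = rng.normal(loc=[0.02, 16, 5, 2], scale=[0.02, 4, 2, 2], size=(200, 4))
flags = detector.predict(new_features)     # +1 = consistent with normal traffic, -1 = anomalous
anomalous_rows = np.where(flags == -1)[0]  # candidate records for the data fusion stage
```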
[0037] The anomaly detection circuit 420 can generate a significant amount of data. For example, it can be quite challenging to subject an avionics platform to all possible usage scenarios during platform testing (and the corresponding training of the anomaly detection algorithm used in the anomaly detection circuit 420). As such, anomaly detection can continue to identify behavior as anomalous after training, even though the detected anomalies are, in fact, normal behavior, and could be recognized as such (by a corresponding circuit trained to recognize them as such) if the detected anomalies were grouped into sets that have similar characteristics. Thus, in some embodiments, the CWR 400 also includes a data fusion circuit 430 for merging the separate anomalies detected by the anomaly detection circuit 420 into groups. For example, one such group is the same anomaly occurring at different times (e.g., repeated anomalous behavior). Another such group comprises different anomalies that occur at the same time (e.g., one instance of anomalous behavior that triggers numerous detectable side effects). Yet another such group comprises multiple different anomalous behaviors observed to originate from a single compromised host over a period of time.
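A minimal sketch of the grouping step just described is given below, covering two of the illustrative groupings (same time window across detectors, and repeated anomalies from the same detector and source). The record format is a hypothetical simplification.

```python
from collections import defaultdict

def fuse_anomalies(anomalies, window_s=1.0):
    """Group anomaly records that share similar characteristics.

    Each anomaly is assumed to be a dict with 'timestamp', 'detector', and 'source'
    keys. Grouping (a) collects anomalies from different detectors that fall in the
    same time window (one event with several side effects); grouping (b) collects
    the same detector firing repeatedly for the same source (repeated anomalous
    behavior, possibly from a single compromised host).
    """
    by_time = defaultdict(list)
    by_detector_and_source = defaultdict(list)
    for a in anomalies:
        by_time[int(a["timestamp"] // window_s)].append(a)
        by_detector_and_source[(a["detector"], a["source"])].append(a)
    # Only multi-member groups carry fused evidence worth passing to the decision stage.
    groups = [g for g in by_time.values() if len(g) > 1]
    groups += [g for g in by_detector_and_source.values() if len(g) > 1]
    return groups
```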
[0038] Furthermore, in some embodiments, the CWR additionally includes a decision-making circuit 440 to further classify the different groups by the probability of being a cyber attack, such as being normal (probably not a cyber attack) or abnormal (probably a cyber attack, or at least bus traffic to worry about). For example, frequent and unusual communication bus requests can be signals of a cyber attack lying in wait until an opportune time for the attack presents itself (and thus comprise abnormal bus requests). On the other hand, sporadic or single anomalies can be more symptomatic of a harmless technical error (and thus comprise normal bus requests, albeit anomalous when considered in isolation). Cyber attacks are not normal operation, so their actions create multiple events that do not resemble normal behavior, and furthermore, such attacks share particular features in their corresponding bus traffic. The decision-making circuit 440 thus distills much of the anomalous behavior into those particular events that are most likely to be of interest (e.g., abnormal bus traffic), including those events that turn out to be normal behavior but which the anomaly detection circuit 420 is inadequately trained to identify as such.
[0039] The decision-making circuit 440 can be, for example, another neural network or another machine learning structure that sits on top of the anomaly detection circuit 420. The data fusion circuit 430 can group the instances of anomalous behavior detected by the anomaly detection circuit 420 into collections or patterns of instances that share common characteristics (such as common temporal or behavioral characteristics). If the platform is considered to be uncompromised, the anomalous bus traffic groups can then be considered as in fact normal, even though the anomaly detection circuit 420 may not be trainable to recognize them as such. For example, at the individual message level, such anomalous bus traffic may always appear as previously unseen and therefore anomalous, while at the data fusion level it may have common characteristics that can be used to identify it as harmless (e.g., normal). As such, in some embodiments, decision-making circuit 440 is trained at a higher level, using anomalous bus traffic transmitted by anomaly detection circuit 420 (and merged by data fusion circuit 430) as input to train decision-making circuit 440 to recognize patterns in anomalous bus traffic that are likely to be harmless (e.g., normal).
[0040] In some embodiments, the decision-making circuit 440 is an expert-coded model for characterizing anomalous behavior. Using known patterns of benign yet anomalous behavior, and perhaps known patterns of troubling behavior (and possible cyber attack), an expert can program decision-making circuit 440 to classify the anomalous bus traffic patterns identified by the anomaly detection circuit 420 (and merged by the data fusion circuit 430) as either harmless or worrisome (e.g., worthy of an alert to the avionics platform operator or of special identification for post-mission/test analysis by an expert as a possible cyber attack). In some embodiments, decision-making circuit 440 uses a partially observable Markov decision process to integrate the fused outputs of multiple anomaly detectors to increase reliability before notifying or alerting operators of potential cyber attacks.
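The decision stage can be illustrated, under simplifying assumptions, by the belief update at the heart of a partially observable Markov decision process: a hidden two-state model (normal versus under attack) whose belief is updated recursively from the fused anomaly observations, with an alert issued when the attack belief crosses a threshold. All probabilities and the threshold below are illustrative placeholders.

```python
import numpy as np

# Hidden states: 0 = system normal, 1 = system under attack (illustrative two-state model).
TRANSITION = np.array([[0.99, 0.01],     # P(next state | current state)
                       [0.05, 0.95]])
# Observation model: P(fused anomaly group observed this step | state).
P_ANOMALY = np.array([0.10, 0.80])       # anomalies are far more likely when under attack

def update_belief(belief, anomaly_observed):
    """One step of the Bayesian recursive estimate underlying the decision process."""
    predicted = TRANSITION.T @ belief                          # predict step
    likelihood = P_ANOMALY if anomaly_observed else (1.0 - P_ANOMALY)
    posterior = likelihood * predicted                         # correct step
    return posterior / posterior.sum()

belief = np.array([0.999, 0.001])        # start out confident the platform is uncompromised
for observed in [False, True, True, True]:   # fused anomaly groups seen per time step
    belief = update_belief(belief, observed)
    if belief[1] > 0.9:                  # alert policy: warn when attack belief is high
        print("abnormal: alert operator", belief[1])
```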
[0041] The behavior logging circuit 450 then records any of the detected anomalous behaviors (for example, the identity of the remote terminal transmitting the anomalous bus traffic, part or all of the detected bus traffic, and the like) into an electronic storage device 470, such as a magnetic, optical, or solid-state drive. For example, logged data can be grouped into the similar-event groups identified by data fusion circuit 430. Thereafter, detected anomalous behavior can be transmitted, for example, to analysts or other analysis tools for further analysis and follow-up activities, or for further retraining of the anomaly detection circuit 420. Although the CWR 400 shows the storage device 470 as being part of the CWR 400, other embodiments may not be so limited. For example, in some embodiments, storage device 470 may be part of another component or may be accessible from the communications bus, such as part of a processing unit connected to the communications bus.
[0042] In some embodiments, the detected anomalous behavior is output in the groups merged by the data fusion circuit 430. In this way, a group of detected anomalous bus traffic messages identified by the decision-making circuit 440 as troubling (for example, abnormal, or lacking a previously identified pattern characteristic of harmless, albeit anomalous, bus traffic) may be specially identified in the recorded output of behavior logging circuit 450. Such troubling group traffic may later be identified (or presumed) as harmless (possibly by an expert) and possibly used to retrain the decision-making circuit 440.
[0043] Furthermore, the behavior warning circuit 460 can alert an operator (such as an operator of the avionics platform) about the detected anomalous behavior, particularly if the decision-making circuit 440 has identified the anomalous behavior as having a high probability of being a cyber attack (e.g., abnormal). For example, the behavior warning circuit 460 may send a message (such as over the communication bus, or as a wireless communication), activate an audible or visual indicator (such as a beep or a light), or the like, for an operator of the platform being protected by the CWR 400.
[0044] In some embodiments, the data fusion circuit 430 and the decision-making circuit 440 use a sensor fusion and decision-making structure for monitoring network traffic. For example, the data fusion circuit 430 can merge the anomalies detected by four different anomaly detectors into groups that have related anomalies (e.g., similar symptoms or similar times, to name a few). In some embodiments, anomaly detection circuit 420 uses numerous (for example, hundreds or thousands of) distinct anomaly detection sensors for network traffic, each sensing a different anomaly or set of anomalies. In some embodiments, the data fusion circuit 430 and the decision-making circuit 440 use a sensor fusion and response structure that combines asynchronous multi-channel sensor reporting with continuous learning and autonomic response through a partially observable Markov decision process. This allows the anomaly detection circuit 420, the data fusion circuit 430, and the decision-making circuit 440 to handle anomaly detection and decision making for network traffic at high speed, and to present useful and timely alerts to the platform operators while recording useful information for post-mission/test analysis.
[0045] In one or more embodiments, the anomaly detection circuit 420 applies a joint probability distribution over the observable traffic space generated by the system component interactions. These include, but are not limited to, the frequency, rate, volume, and content of messages exchanged over a common bus. In order to avoid making incorrect assumptions about the parametric form of these distributions, in some embodiments, non-parametric learning through kernel density estimation (KDE) is used to train the anomaly detection circuit 420. Then, when presented with new observations, the CWR 400 identifies anomalous communication patterns and, using the previous observations, calculates the marginal distribution to estimate the expected response given the stimulus. This is a non-parametric method, in which there are no a priori assumptions about the structure of the underlying stochastic process (e.g., Gaussian, multinomial, Bernoulli, or the like). Instead, in one or more embodiments, for each sample, the joint probability is estimated based on its "proximity" to other previously observed samples. In some embodiments, the anomaly detection circuit 420, in conjunction with the data fusion circuit 430 and the decision-making circuit 440, will produce detection artifacts that provide human-readable policy language that allows post-mission cyber analysis.
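A hedged sketch of the kernel density estimation approach described above: a KDE model is fit to features of normal traffic, and new samples whose estimated density falls well below that of previously observed samples are treated as anomalous. The feature dimensions, bandwidth, and percentile threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Non-parametric model of normal bus behavior via kernel density estimation (KDE).
# Feature columns are hypothetical (e.g., message rate, payload statistics); stand-in
# training data replaces real captures of an uncompromised platform.
normal = np.random.default_rng(0).normal(size=(5000, 4))

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(normal)
threshold = np.percentile(kde.score_samples(normal), 1)       # 1st percentile of normal log-density

new_samples = np.random.default_rng(1).normal(size=(100, 4))
log_density = kde.score_samples(new_samples)                  # log-density under the learned model
anomalous = new_samples[log_density < threshold]              # "far" from previously observed samples
```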
[0046] With respect to method 500 of Figure 5, processing begins with sensing 510 traffic on a communications bus (for example, a 1553 bus). This can be accomplished, for example, using the bus sensing circuit 410 of Figure 4. Processing continues with detecting 520 anomalous behavior in the perceived bus traffic. This detection 520 can be performed, for example, by the anomaly detection circuit 420 of Figure 4. The detection 520 can be performed by multiple anomaly detectors, each trained to detect a different type (or types) of anomalous behavior. Once detected, processing continues with fusing 530 the detected anomalous behavior into groups that share similar features (e.g., temporal coincidence, similar side effects being observed, and the like). For example, when there are multiple anomaly detectors, fusion 530 can include grouping the anomalous bus traffic by time (to see, for example, whether similar events lead to similar sets of anomalies being detected across multiple detectors), or by type (to see, for example, whether similar anomalies are being detected over time, which might be symptomatic of a cyber attack performing repeated anomalous behavior), or by some other such criteria. Fusion 530 can be performed, for example, by the data fusion circuit 430 of Figure 4.
[0047] Once detected and fused, processing continues with deciding 540 whether the fused anomalous behavior is normal or abnormal. In one embodiment, decision 540 is performed using a partially observable Markov decision process. Decision 540 may be performed, for example, by decision-making circuit 440 of Figure 4. Method 500 further includes recording 550 the detected anomalous behavior in an electronic storage device (such as storage device 470). Such a record 550 may include information such as the sending device or the intended receiving device of the anomalous bus message. In some embodiments, other fields used in the communication protocol for the bus are included in the logged data. Additionally, in some embodiments, the body of the anomalous bus message is included in the logged data. Recording 550 can be performed, for example, by the behavior logging circuit 450 of Figure 4. Furthermore, processing continues with alerting 560 an operator of the fused anomalous behavior identified as abnormal. Alert 560 may take one or more of several forms, such as an audible signal or alarm, a message (data bus communication, email, text, or the like), or a visible signal (such as a light), to name some. Alert 560 can be realized, for example, by the behavior warning circuit 460 of Figure 4.
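For illustration only, a record produced by the logging step 550 might be serialized as follows. The field names are hypothetical, and the real set of logged protocol fields is platform-specific.

```python
import json
import time

def log_anomalous_message(storage_path, group_id, verdict, message):
    """Append one detected-anomaly record (illustrative fields only) to electronic storage."""
    record = {
        "wall_time": time.time(),
        "group_id": group_id,              # fused group this message belongs to
        "verdict": verdict,                # "normal" or "abnormal" from the decision stage
        "source_rt": message["source"],    # sending remote terminal
        "dest_rt": message["dest"],        # intended receiving terminal
        "protocol_fields": message.get("protocol_fields", {}),
        "body_words": message["words"],    # message body, kept for post-mission analysis
    }
    with open(storage_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```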
[0048] For a CWR to provide cyber protection, for example, for aviation platforms, in one or more embodiments the CWR itself is resilient to cyber attacks. The presence of a CWR in a weapon system can attract the attention of a cyber adversary, making it a potential target. For example, the CWR can be implemented with a secure operating system and hardware design. Such a CWR can, for example, stop the most well-known types of cyber attacks, such as buffer overflows, code injections, and others. The CWR can also implement an instruction-level root of trust in hardware that cannot be subverted by malicious or poorly written code. For example, in one embodiment, the root of trust for the CWR associates each piece of data in the system with a metadata tag that describes its provenance or its purpose (e.g., "this is an instruction", "this came from the network"). Furthermore, the CWR can propagate metadata as instructions are executed and verify that policy rule compliance occurs throughout the computation.
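The tag-and-propagate idea behind such a root of trust can be illustrated, purely conceptually and in software, as follows. The tag names and policy are hypothetical, and a real implementation would enforce this at the instruction level in hardware.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    value: int
    tags: frozenset   # provenance/purpose metadata, e.g. {"from-network"} or {"instruction"}

def policy_allows_execute(word: Tagged) -> bool:
    # Illustrative policy: only words tagged as instructions, and never data that
    # arrived from the network, may be executed.
    return "instruction" in word.tags and "from-network" not in word.tags

def add(a: Tagged, b: Tagged) -> Tagged:
    # Tag propagation: the result inherits the union of its operands' provenance tags.
    return Tagged(a.value + b.value, a.tags | b.tags)

payload = Tagged(0x4F80, frozenset({"from-network"}))
assert not policy_allows_execute(payload)   # network data cannot be treated as code
```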
[0049] The CWR can also exercise great flexibility to enforce foundational security policies, without arbitrary limitations on metadata size or the number of supported policies. In one embodiment, the hardware extensions in the CWR enforce memory safety, control flow integrity, and taint tracking. This helps protect the CWR by creating an almost insurmountable level of effort for cyber attackers attempting to subvert it.
[0050] In some embodiments, the CWR is an independent line-replaceable unit that monitors bus traffic (such as 1553 bus traffic) from a location (such as a bus monitor location) attached to the communication bus, without interfering with other terminals or devices attached to the communication bus. From the bus monitor location, the CWR can detect abnormal behavior among other systems connected to the 1553 bus, alerting an operator when anomalous behavior is detected, as well as recording this behavior for post-mission/test analysis.
Anomalous Behavior and Data Fusion
[0051] Additional details of anomalous behavior detection and data fusion will now be presented. For ease of description, they will be described in relation to the 1553 serial bus architecture, although the concepts are applicable to other bus architectures and other communication networks, as will be apparent in light of the present description. The MIL-STD-1553 network is a serial messaging interface that has a physical layer and a data link protocol for exchanging data (e.g., messages) between two or more terminals connected to a communication bus. The physical network topology can be thought of as flat (for example, all terminals are connected to and perceive the same bus signals). At least one of the terminals serves as a bus controller. There can be multiple bus controllers, with one of them acting as the bus controller and the others serving as backups if the active bus controller is no longer able to perform the bus controller services. The remaining terminals in the 1553 architecture include the remote terminals (which are associated with the corresponding subsystems on the avionics platform) and the bus monitors (which monitor traffic on the communications bus but otherwise do not interfere with the communications bus or the 1553 protocol). The remote terminals communicate (via messages) with each other and with the bus controller.
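For concreteness, the sketch below decodes the 16-bit payload of a MIL-STD-1553 command word (remote terminal address, transmit/receive bit, subaddress or mode field, and word count or mode code). Sync and parity are physical-layer concerns and are assumed here to be handled by the sensing hardware.

```python
def parse_1553_command_word(word: int) -> dict:
    """Decode the 16-bit payload of a MIL-STD-1553 command word.

    Field layout (most significant bits first): 5-bit remote terminal address,
    1-bit transmit/receive flag, 5-bit subaddress/mode field, and 5-bit word
    count or mode code.
    """
    return {
        "rt_address": (word >> 11) & 0x1F,     # 31 is reserved for broadcast
        "transmit": bool((word >> 10) & 0x1),  # 1 = RT transmits, 0 = RT receives
        "subaddress": (word >> 5) & 0x1F,      # 0 or 31 indicates a mode-code command
        "word_count": word & 0x1F,             # word count, or mode code when subaddress is 0/31
    }

# Example: command for remote terminal 5 to receive 4 data words at subaddress 2.
cmd = (5 << 11) | (0 << 10) | (2 << 5) | 4
assert parse_1553_command_word(cmd) == {
    "rt_address": 5, "transmit": False, "subaddress": 2, "word_count": 4}
```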
[0052] The types of attack available to a cyber attacker attempting to exploit the 1553 network depend on the specific foothold the attacker gains on the avionics platform, but generally include: the attacker present on one or more systems outside the 1553 network but leveraging the data sent or received through the 1553 network, the attacker present at a remote terminal connected to the 1553 network, the attacker present at a bus controller for the 1553 network, and the attacker present at multiple combinations of these footholds. Given this set of footholds, some of the possible types of cyber attacks include: methods by which a compromised bus controller impacts the system, methods by which a compromised remote terminal impacts the system, methods by which any compromised component connected to the network impacts the system, attacks that violate the 1553 standard or the application layer, and attacks in which a compromised bus controller or remote terminal sends incorrect data to another bus controller or remote terminal.
[0053] Bus controllers have a large degree of control over a 1553 network. A compromised bus controller enables a high degree of control by the cyber attacker, such as enabling the attacker to initiate new messages, remove existing messages, or intercept and modify data in transit between remote terminals. Compromised remote terminals, on the other hand, can disrupt the network, for example, by initiating new messages on the 1553 bus without coordination by the bus controller, by imitating a different remote terminal, or even by trying to become the bus controller. A compromised bus controller or remote terminal on the 1553 network can deny messages between other remote terminals. Attacks can also violate the basic rules and conventions of the 1553 standard, or the application layer data it carries. Cyber attacks can also involve a compromised bus controller or remote terminal that deliberately sends incorrect data to another bus controller or remote terminal as part of the normal data exchange cycle. This can include, for example, measurement data, control commands, system status, or other types of information.
[0054] An example 1553 network can be modeled as containing four layers, from highest to lowest: the application layer, the transport layer, the data link layer, and the physical layer. The data link layer (e.g., word and message protocols) and physical layer (e.g., hardware components and signal encoding) are formally defined in MIL-STD-1553. The upper layers, namely the application layer (for example, bus controller and remote terminal protocols and interface methods) and the transport layer (for example, message packets, sequencing, and rates), are logical layers left to the designer of the avionics system to define (for example, including concepts such as message frequencies and physical bus patterns). Each layer has its own characteristics that can be monitored or observed, with the lower layers (data link and physical) being the most consistent between different 1553 implementations and the higher layers (application and transport) being the most customized for specific 1553 implementations. For example, the lower layers, by virtue of their relative simplicity and state accuracy (e.g., permissible commands), may be amenable to expert-coded detection to explicitly check for invalid states or commands. However, the upper layers, whose behavior may follow all of the rules while still being malicious in intent (such as a cyber attack), are significantly more challenging.
[0055] In this way, as the above-described cyber attacks occur on a 1553 network, they produce side effects that are observable by a high-fidelity bus monitor. For the purpose of organizing these observable side effects, the 1553 network can be considered to include the four layers described above, the lowest being the physical layer, the next lowest being the data link layer, followed by the transport layer, with the highest layer being the application layer. The most basic layer is the physical layer, which is responsible for environmental compliance (e.g., concepts such as voltages, frequencies, signal-to-noise ratio (SNR), timing, and parity). The physical layer contains observable elements relating to the fundamental electrical environment necessary for the proper operation of the 1553 network. Certain cyber attacks can cause disturbances at this level, especially in cases where abuse of the 1553 bus causes message collisions.
[0056] The next layer up is the data link layer, which is responsible for conformance to the 1553 standard (for example, message protocol concepts such as word presence, word sequence, response timing, valid remote terminal and subaddress numbers, and field consistency (such as length, command, and status)). The data link layer covers low-level implementation details of the 1553 protocol. At this level, for example, it can be detected that valid terminals and subaddresses are present, and that the expected message structure is intact, including the allowed message types and the expected word sequences. Some types of cyber attacks can cause changes to this ordering or produce multiple repeated copies of certain words in the message. In a normal (e.g., uncompromised) system, typical request and response timings for 1553 transactions can be monitored at this level to provide examples of normal bus traffic at this layer.
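An illustrative, expert-coded set of data link layer checks along these lines might look like the following. The allowed terminal and subaddress sets are hypothetical platform-specific inputs, and the 4 to 12 microsecond window reflects the nominal MIL-STD-1553B status response timing.

```python
# Hypothetical platform-specific inputs for the expert-coded data link layer checks.
ALLOWED_RT = {1, 2, 3, 5, 8}          # remote terminal addresses present on this platform
ALLOWED_SUBADDR = {1, 2, 4, 30}       # subaddresses in use on this platform

def check_data_link(msg) -> list:
    """Return a list of data-link-layer findings for one decoded 1553 message (dict)."""
    findings = []
    if msg["rt_address"] not in ALLOWED_RT:
        findings.append("unknown remote terminal address")
    if msg["subaddress"] not in ALLOWED_SUBADDR:
        findings.append("unexpected subaddress")
    if msg["word_count"] != len(msg["data_words"]):
        findings.append("word count does not match observed words")
    if not (4e-6 <= msg["response_time_s"] <= 12e-6):
        findings.append("status response outside nominal timing window")
    return findings                    # an empty list means no data-link-layer anomaly
```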
[0057] The next level up is the transport layer, which is responsible for scheduling compliance (for example, concepts such as the valid message set, rates and sequencing, retry and redundancy behaviors, and asynchronous message management). The transport layer defines platform-specific attributes regarding 1553 usage, such as the number and length of message packets. Messages occurring on the 1553 bus can be uniquely identified by attributes including their type, source, destination, and length. In this layer, it can be verified that the system is using the set of messages whose occurrence is expected as part of the defined schedule, with the proper sequence and timing. The CWR can account for changes in this schedule that may result from different operational modes of the platform. At this level, it is also possible to verify that the retry or redundancy features that spread messages across multiple buses are performing as expected, without abuse.
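A minimal sketch of a transport layer schedule check follows, assuming a hypothetical table of nominal message rates for the current operating mode.

```python
# Illustrative transport-layer check: compare observed message rates against the
# platform's defined schedule. Message identities and nominal rates are hypothetical,
# keyed by (name, source RT, destination RT, subaddress).
NOMINAL_RATE_HZ = {("nav", 1, 5, 2): 50.0, ("ins", 3, 2, 4): 25.0}

def check_schedule(observed_counts, capture_seconds, tolerance=0.2):
    """Flag scheduled messages whose observed rate deviates beyond the tolerance."""
    findings = {}
    for key, nominal in NOMINAL_RATE_HZ.items():
        rate = observed_counts.get(key, 0) / capture_seconds
        if abs(rate - nominal) > tolerance * nominal:
            findings[key] = rate       # message missing, too slow, or injected too often
    return findings
```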
[0058] The top level is the application layer, which is responsible for application compliance (for example, concepts such as message structure, data range and correlation, and derived range and correlation). Details at the application layer are specific to the individual systems (e.g., subsystems, applications, weapon systems, and other devices) in relation to their use of the serial data bus and their implementations. For example, a navigation device might transmit one type of data using message formats and data representations established by its developers, while a threat warning system might use a completely different representation for its data. Detection of a valid structure (for the corresponding application) is a useful observable element. When data fields are specified or can otherwise be identified, a set of normal behaviors can be observed based on their values. For example, data may be known to have a limited range of values, to exhibit a known distribution, or to have a limited rate at which it can change. In other cases, multiple data fields may show correlations, such as always moving together, or negating each other. Such data can be used to train an anomaly detection circuit through machine learning techniques (e.g., a neural network). Behavior outside these patterns may comprise an indicator of a cyber attack.
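The application layer observables just described (valid ranges, limited rates of change, and correlated fields) can be sketched as follows for a single pair of fields. The field names, limits, and sampling interval are hypothetical illustrations.

```python
import numpy as np

def check_application_fields(altitude_m, climb_rate_ms, dt_s=0.02):
    """Illustrative application-layer checks on two decoded, correlated data fields.

    altitude_m and climb_rate_ms are equal-length sequences of samples taken dt_s apart.
    """
    findings = []
    if not (-500.0 <= altitude_m[-1] <= 20000.0):              # known valid range
        findings.append("altitude outside plausible range")
    if abs(altitude_m[-1] - altitude_m[-2]) / dt_s > 350.0:    # limited rate of change
        findings.append("altitude changing faster than physically plausible")
    # Correlated fields should move together: reported climb rate vs. differenced altitude.
    derived = np.diff(altitude_m) / dt_s
    if np.corrcoef(derived, np.asarray(climb_rate_ms)[1:])[0, 1] < 0.5:
        findings.append("altitude and climb rate no longer consistent")
    return findings
```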
[0059] A cyber warning receiver (CWR) operates by monitoring traffic and discovering anomalies in the behavior of these observations and measurements. In one or more embodiments, the example behavior (e.g., "normal" behavior) for a target platform of interest can be characterized based on a set of measurements performed on an uncompromised system under normal circumstances, such as an avionics platform of interest that has specific 1553 network specifications and inputs. Examples of specific inputs include the valid remote terminal and subaddress ranges in use, the message schedules in each of the different operating modes, and observations from real-world data collections.
[0060] In general, the higher the network layer at which observations are collected and characterization is desired, the more specific the solution is to a particular attack, and the greater the amount of data that must be collected to establish normal behavior and to detect anomalies. By establishing normal behavior through collected observations, the observable side effects of cyber attacks (which are agnostic regarding the details of a specific attack implementation) can be leveraged to enable detection of attacks that have not been observed before (or preconceived) by the defenders. For the lower network layers, the number of possible attack approaches is limited, making it easy for subject matter experts (SMEs) to explicitly define a comprehensive set of detectors. At these lower levels, detectors are also more portable than at higher levels. This simplifies the task of implementing cyber threat detection across platforms. However, although there are many advantages to monitoring the 1553 bus at the lower levels, the observations derived from these layers are not sufficient in themselves. There are important classes of cyber attacks that do not produce observable impacts on these layers. For example, the manipulation of data coming from a given device will only be observable through changes in platform-specific messages that exist in the application layer, as would a violation of the application layer message formatting. To characterize these forms of cyber attack through the application layer, and to be able to detect them quickly, more sophisticated anomaly detectors must be used.
[0061] For reasons such as the large volume of data relationships that exist across all systems and messages on a complete platform (such as a 1553 defense platform), the specifics of the application layer message formats and field locations for dozens of devices and hundreds of unique messages, and the need to discover subtle or secondary correlations that can evade the intuitions of human cyber defense experts and thus remain open to exploitation by malicious parties, the anomaly detector must be created through automated techniques. For example, a CWR can be trained, such as through machine learning, to recognize how a system should behave under normal operating conditions, and how this behavior will manifest itself in the various observable measurements described above. Advances in machine learning provide this capability and address the challenges identified above. Powerful parameter estimation and model structure detection techniques from machine learning are beneficial for system identification. These capabilities help address the breadth of anomaly detection instances required to form a robust monitoring solution. Activity outside of what the normal behavior models would expect will be considered anomalous and will become a data point for cyber attack investigation.
[0062] Modern machine learning approaches incorporate feature engineering and credit assignment as key elements. Deep machine learning techniques, for example, combine input observations (e.g., the values in each data field of a 1553 message) into more abstract aggregated features that, while no longer representing actual physical measurements, provide an excellent basis for making decisions (e.g., normal behavior or not). Machine learning can automatically select which learned features contribute to making such decisions and which are essentially irrelevant, and assign weights or credit to the various features accordingly. For example, in addition to increasing the predictive power of the learned normality models, these features of appropriate machine learning approaches avoid the challenge of manually identifying the most important data fields in the 1553 application layer and specifying their relative importance. Manual specification is inconvenient, especially considering that application layer message definitions may not exist in one place, but may be spread across multiple disparate interface description documents, each using different formats, which makes them poorly suited to automated parsing.
[0063] Machine learning enables reasoning over volumes of data far larger than a human expert alone could handle. Anomaly detectors extend visibility into the subtle interactions and mutual patterns of behavior exhibited by disparate elements on the 1553 bus. These patterns may seem innocuous to cyber defense experts trying to predict attack vectors. However, these are exactly the oversights that are inevitably exploited. Finding instances of such subtle relationships enhances situational awareness. Notably, insight into such patterns can also prove beneficial to system evaluation and troubleshooting when non-attack anomalies surface.
[0064] By addressing the challenges highlighted above for platform security using deep data inspection at the application layer, machine learning can be a key enabler for cyber situational awareness. The use of machine learning is not exclusive to the application layer, however; it is also useful at the lower protocol layers. For example, machine learning algorithms can learn the normal message schedule for the platform as a function of different operating modes, or establish normal electrical signal levels at the physical layer, to name a few. Furthermore, these adaptive algorithms can help eliminate the need to tune and customize detection systems for each individual instance of the protected platform. Instead, they enable the deployment of solutions applicable across the entire fleet of platforms. In some embodiments, anomaly detection includes multiple anomaly detectors, each trained to identify a different type of anomalous behavior in the bus traffic (for example, each detector can perceive anomalies at a different network layer).
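A hedged sketch of schedule learning at a lower protocol layer follows: it estimates the normal inter-arrival interval per (operating mode, message ID) pair from recorded logs and flags live messages whose timing departs from that interval. The log format, field names, and tolerance are assumptions for illustration, not details taken from the description.

```python
from collections import defaultdict
from statistics import mean

def learn_schedule(log):
    """log: iterable of (mode, msg_id, timestamp) tuples from recorded normal operation."""
    gaps, last_seen = defaultdict(list), {}
    for mode, msg_id, ts in log:
        key = (mode, msg_id)
        if key in last_seen:
            gaps[key].append(ts - last_seen[key])
        last_seen[key] = ts
    return {key: mean(values) for key, values in gaps.items() if values}

def off_schedule(key, gap, schedule, tolerance=0.5):
    """Flag a message whose inter-arrival gap differs from the learned interval by more than the tolerance."""
    expected = schedule.get(key)
    return expected is not None and abs(gap - expected) > tolerance * expected

# Usage with placeholder data: a message normally arriving every 0.05 s shows up 0.25 s late.
schedule = learn_schedule([("cruise", 0x0A, 0.00), ("cruise", 0x0A, 0.05), ("cruise", 0x0A, 0.10)])
print(off_schedule(("cruise", 0x0A), 0.25, schedule))   # True: far from the learned gap
```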
[0065] According to one or more embodiments, the anomaly detection circuit 420 is built through an iterative machine learning process that trains an algorithm to ingest data representative of the platform network that the CWR must defend against cyber attacks, extract features from the representative data, and construct representations of the expected behavior of the extracted features. For example, in some embodiments, bus data recorded during field tests of the target platform is used for the initial training of the anomaly detection algorithm. This can include positive bus traffic samples (e.g., good data acquired from an uncompromised system) as well as negative bus traffic samples (e.g., data that has been deliberately corrupted to represent a bad state, to train or additionally verify that the anomaly detectors recognize bad states in addition to good states). The anomaly detection circuit 420 is then built based on the initially trained algorithm. At this point, additional field tests are carried out and the bus data flagged as anomalous behavior by the anomaly detection circuit 420 is analyzed (e.g., by a subject matter expert) to determine whether the behavior is truly anomalous or is instead a false positive by the anomaly detection circuit 420. In other embodiments, the flagged anomalous bus data is presumed to be false positive (under the assumption that the field tests are operating in a normal, uncompromised state).
[0066] Additional training of the anomaly detection algorithm can then be performed, such as with the false positive data, with all of the newly acquired bus data, or with further acquired bus data (to name a few techniques), to better train the anomaly detection algorithm to identify bus data as either normal (for example, matching the characteristics of bus data previously acquired under normal operating conditions) or anomalous (for example, not matching the characteristics of bus data previously acquired under normal operating conditions). Such additional training may be repeated through the above process until the rate of false positive data (or presumed false positive data) is reduced to an acceptable level (e.g., low enough that the burden placed on experts to analyze data identified as anomalous is acceptable, or within their capacity). However, further processing of the anomalous data, such as through data fusion, may be necessary to address the false positive data that the anomaly detection circuit 420 continues to identify regardless of how much training it undergoes. Through data fusion, anomalous data can be further classified as normal (e.g., harmless) or abnormal (e.g., a possible cyber attack, data to be concerned about).
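The iterative refinement described above can be pictured with the following illustrative loop: train, run field data through the detector, have flagged records reviewed, fold confirmed false positives back into the "normal" training set, and repeat until the false-positive rate is acceptable. The function signatures, labels, and stopping criterion are assumptions for the sketch, not the patented workflow.

```python
def refine_detector(train_fn, detector, field_records, review_fn,
                    max_rounds=5, target_fp_rate=0.01):
    """Iteratively retrain until the detector's false-positive rate on field data is acceptable."""
    normal_data = list(field_records)
    model = None
    for _ in range(max_rounds):
        model = train_fn(normal_data)                              # (re)train on presumed-normal data
        flagged = [r for r in field_records if detector(model, r)] # records the detector calls anomalous
        false_positives = [r for r in flagged
                           if review_fn(r) == "false_positive"]    # expert review of flagged records
        fp_rate = len(false_positives) / max(len(field_records), 1)
        if fp_rate <= target_fp_rate:
            return model
        normal_data.extend(false_positives)                        # teach the model these are normal
    return model
```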
[0067] In one or more embodiments of the present description, additional training data is acquired during the mission cycle for a given platform. Because such cycles are likely to expose the platform to even more normal behavior, the use of bus traffic logs collected post-mission supports incremental updates to the training sets and the learned behavior models. Distributing new models across different platform instances at regular intervals allows all protected platforms to continuously benefit from learning over the collective data. In some embodiments, any of the newly acquired bus data that is identified as anomalous can be verified as false positive data (e.g., by an expert analyst) prior to further training (e.g., to help prevent the anomaly detection algorithm from being trained to recognize bus data coming from compromised systems as if it came from normal systems). With more data and collective knowledge, the performance of these machine-learning-based systems will continue to improve, providing a defense system that evolves with new threats and adapts to defeat them.
[0068] As mentioned above, not every anomaly means that the platform is under attack (for example, some behavior identified as anomalous may simply be a false positive). Systems regularly move in and out of new states and scenarios and experience abnormal conditions (e.g., situations whose corresponding bus data has not previously been recorded and used to train the anomaly detection algorithm) that result from a range of incidental or failure modes. The key distinctions between false positives (e.g., technical system errors, states not encountered during normal operation, and the like) and true positives (e.g., cyber attacks) are the correlations that exist between the observations and the story that they tell. Any individual cyber attack step is likely to generate a set of measurable side effects and artifacts unlike any behavior encountered during normal (uncompromised) operation. Multiple such steps in sequence begin to form a picture of the attacker's current presence and goals in a cyber attack. By contrast, an isolated anomalous event is more likely to be a technical error, a one-off phenomenon, or other false positive data.
[0069] One technique to distinguish phenomena such as isolated anomalous events (and other false positive bus data) from multiple anomalous events that have similar characteristics (which indicate a possible cyber attack) is to use a data fusion system (such as the data fusion circuit 430) to put these pieces together. Data fusion formulates the best possible estimate of the state of the underlying system based on observations, and then determines the probability that any detected anomalies are caused, for example, by a fundamental failure, an engagement in a scenario, a mode of operation not previously characterized, or a cyber attack. As a precaution, and for proper post-mission analysis, any anomalous bus data (such as the raw bus data that triggers the anomaly detection circuit 420) can be captured and recorded (such as in the electronic storage device 470).
[0070] With data fusion, anomalous data from multiple anomaly sensors (e.g., at similar times) can be aggregated into groups, such as through a partially observable Markov decision process, to improve confidence in classifying such anomalous behavior as either normal (albeit anomalous, such as technical errors or first-time occurrences of otherwise-understood bus traffic) or abnormal (for example, a possible cyber attack, such as an unexplained group of anomalous bus traffic that does not correspond to any previously identified bus traffic on an uncompromised system).
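A minimal sketch of the time-based grouping step is shown below: anomaly events whose timestamps fall within a short window of one another are fused into one group for downstream decision-making. The event format and window length are illustrative assumptions.

```python
def fuse_by_time(events, window_s=1.0):
    """events: iterable of (timestamp, detector_id, detail) tuples; returns a list of event groups."""
    groups, current = [], []
    for event in sorted(events, key=lambda e: e[0]):     # order events by timestamp
        if current and event[0] - current[-1][0] > window_s:
            groups.append(current)                       # gap too large: close the current group
            current = []
        current.append(event)
    if current:
        groups.append(current)
    return groups

# Usage: three events within one second fuse into one group; a later event starts another.
print(len(fuse_by_time([(0.1, "nn1", "x"), (0.4, "nn2", "y"), (0.9, "nn1", "z"), (5.0, "nn3", "w")])))  # 2
```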
[0071] In some embodiments, the CWR is configured as a passive device that monitors the system for malicious activity and alerts operators of anything suspicious, but never actively interacts with the network. For example, the CWR can be placed on a system so as to enable monitoring of all applicable buses. This option provides a degree of assurance from a regression testing standpoint and reduces or minimizes the likelihood that the CWR will have any performance impact on mission-critical activities.
[0072] In other embodiments, the CWR is configured as an active device, such as by being positioned in-line with critical 1553 bus subsystems, prepared to take quick and decisive action to stop cyber attacks in their tracks. Since cyber attacks can occur in the blink of an eye, active defense may, in some cases, be the only reasonable way to prevent an attack (such as an unforeseen attack) from succeeding. However, an in-line device like this could be tricked by attackers into providing an inappropriate response, in effect becoming part of the attack itself. Accordingly, in some embodiments, design precautions are taken to ensure that attack suppression actions delivered by an in-line CWR cannot generate consequences beyond what the original cyber attack would have achieved by itself.
[0073] Given its role, and especially when considered as part of an active defense configuration, a CWR according to some embodiments can itself become an attractive target for adversaries. As part of cyber operations against a platform, attackers may make it a priority to disable or interfere with the CWR to enable their other goals. As such, in some embodiments, the CWR includes security-hardened hardware and software maintained through an active security development lifecycle that includes regular software patching.
[0074] Modern weapons platforms continue to reach new heights of interconnectivity and software-defined automation. With these improvements comes the need to address growing cybersecurity risks. Evidence from the commercial and industrial sectors suggests that many of the observed access vectors and attack methods also apply to military platforms, with consequences that are potentially far more severe. Despite this reality, many modern weapons system platforms currently operate without sufficient means of providing detailed situational awareness of their cybersecurity state. Accordingly, in one or more embodiments of the present description, survivability equipment is provided that can monitor the platform's networks for malicious activity. Network monitoring enables the near-term ability to detect or deter cyber attacks, which currently pose a very real threat.
[0075] The MIL-STD-1553 bus is identified as an excellent place to observe cyber attacks in progress. This bus is pervasive across both modern and legacy defense platforms, and forms the backbone for the exchange of commands, state, and data between operators and the critical subsystems essential to a platform's function. A CWR according to an embodiment of the present description can monitor this bus for a range of malicious activity and attack types. This includes attacks that are carried out to exploit the 1553 bus itself, as well as attacks that cause deviations from established system behavior norms for data crossing this bus. The CWR can measure the layers of the 1553-based platform network over time and identify anomalous or malicious activity. The CWR can implement detectors of two categories: explicit detection rules defined by subject matter experts, and models of system behavior derived using machine learning. The use of explicit detection rules enables monitoring of the 1553 data link and physical layers for anomalous activity that violates the 1553 standard or does not agree with the basic attributes of the known system configuration. The use of learned system behaviors enables, for example, deep inspection of messages passing over the 1553 interface to verify that they are operating on schedule, that expected correlations exist between various data fields, and that data ranges and exchange rates are at their expected values.
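As a hedged example of the first category, an explicit, expert-defined rule at the 1553 data link layer might verify that each command addresses a known remote terminal and sub-address with the word count the system configuration expects. The configuration table and message fields below are illustrative assumptions, not values from the description.

```python
KNOWN_CONFIG = {   # (rt_address, subaddress) -> expected word count for this platform configuration
    (5, 1): 16,
    (5, 2): 4,
    (12, 3): 32,
}

def violates_config(rt_address: int, subaddress: int, word_count: int) -> bool:
    """Return True if the command does not match the known system configuration."""
    expected = KNOWN_CONFIG.get((rt_address, subaddress))
    if expected is None:
        return True                     # traffic to an unknown terminal/sub-address
    return word_count != expected

print(violates_config(5, 1, 16))        # False: matches the known configuration
print(violates_config(9, 9, 8))         # True: terminal not in the configuration
```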
[0076] Using such a CWR, when a cyber attack occurs, the resulting observations and anomalies are collected by an anomaly detector and examined using a data fusion process. This process estimates the security-critical state of the platform and tracks the attacker's actions. When critical systems are involved or a risk to survivability is identified, the CWR can alert operators. Cyber warning capabilities form a key addition to the platform's survivability equipment suite, providing visibility into the cyber domain and keeping systems secure in the face of this emerging advanced threat.
[0077] Figure 6 is a schematic diagram illustrating an example neural network based anomaly sensor 600 for analyzing bus traffic and an alert generator 650 based on a partially observable Markov decision process (POMDP) for deciding whether the analyzed bus traffic is normal or abnormal, according to an embodiment of the present description. Anomaly sensor 600 includes a neural network 610 that feeds bus traffic messages 615 (such as MIL-STD-1553 messages) into input nodes (or neurons) 620 that constitute a first layer (or input layer) of the neural network 610. The input bus traffic (for example, the input bus traffic message 615) causes some of the input nodes 620 to fire, sending weighted signals over the corresponding connections (or axons or synapses) 625 to the hidden nodes 630 that constitute a second layer (or hidden layer) of the neural network 610. The weighted signals, in turn, cause some of the hidden nodes 630 to fire, sending weighted signals over the corresponding connections 635 to the output nodes 640 that constitute a third layer (or last layer or output layer) of the neural network 610. The weighted signals, in turn, cause some of the output nodes 640 to fire, generating a state 645 based on which particular output nodes 640 fired.
[0078] The output state 645 is the useful information (e.g., a classification, such as anomalous or not) returned by the neural network 610 based on the input bus traffic message 615. The neural network 610 can be trained to identify whether or not bus traffic messages exhibit anomalous behavior based on machine learning techniques that assign weights to the connections 625 and 635. During a training phase, the neural network sensor connections learn the subtle features of continuous streams of labeled messages using machine learning. For example, during training, only a few fields in the messages may end up playing any critical role in distinguishing normal from anomalous behavior. However, the neural network 610 learns which fields are critical with minimal human effort based on the training data (e.g., normal bus traffic). Although the anomaly sensor 600 shows only one neural network, in some embodiments numerous neural networks 610 are present, each of which is trained to detect a different type of anomalous bus traffic. In this way, numerous states 645 can be generated from a single input message 615. Furthermore, although the neural network 610 shows only one layer of hidden nodes 630, in other embodiments neural networks can include two or more layers of hidden nodes. In addition, in other embodiments, alternative classifiers, such as support vector machines, can be applied in place of neural networks to perform the anomaly detection function.
[0079] Output states 645 from a given input message 615 (or from nearby or similar input messages 615) can be fused to capture the correlations among the anomalous data that help identify whether the anomalous data is normal bus traffic (anomalous only when each input message 615 or neural network state 645 is considered in isolation) or abnormal bus traffic (and thus potentially harmful, such as a cyber attack). The fused anomalous states can be fed into the alert generator 650, which uses a POMDP-based controller 660 to decide whether the fused anomalous states represent normal or abnormal bus traffic. Integrating multiple detectors and samples to improve event confidence, the POMDP controller 660 uses a stochastic state transition decision-making process with reinforcement learning to identify which fused groups of anomalous behavior events represent normal bus traffic and which represent abnormal (and possibly dangerous) bus traffic. The alert generator 650 then alerts an operator of the 1553 network-based platform to any fused anomalous behavior identified by the POMDP controller 660 as abnormal.
[0080] Figure 7 is a schematic diagram illustrating an example neural network 710 for analyzing bus traffic, according to an embodiment of the present description. Neural network 710 can be used, for example, as a CWR neural network sensor. Neural network 710 feeds embedded bus traffic messages 715 (such as MIL-STD-1553 messages) into input nodes 720. In response to an input message 715, some of the input nodes 720 fire, sending weighted signals through the corresponding first connections 725 to the hidden nodes 730. The weighted signals, in turn, cause some of the hidden nodes 730 to fire, sending weighted signals through the corresponding second connections 735 to the output nodes 740.
[0081] Neural network 710 is an example of a feed-forward neural network (that is, processing moves in one direction from inputs to outputs, without any feedback). For example, each input node 720 in the input layer can represent a corresponding byte of a fixed-length message (in this case, N = 162 input nodes for a 162-byte message, which can be used as a message packet in a 1553 network). The output layer includes two output nodes 740, which represent true and false (such as anomalous or not). The hidden layer has (N x 2) / 3 = 108 hidden nodes (i.e., two-thirds the number of input nodes), but this is just an example, and other embodiments are not so limited. For example, the number of hidden nodes can be two-thirds the combined number of input and output nodes. Training the neural network 710 is, in general, about finding the best weights for classifying the input messages 715. In more detail, training the neural network 710 includes presenting the training data to the neural network and then using machine learning techniques, such as stochastic gradient descent and backpropagation, to determine the weights of the corresponding first and second connections 725 and 735 so that the neural network correctly identifies (for example, classifies) input messages as anomalous or not. Neural network 710 may be a simple network (for example, looking for just one type of anomaly), so a single hidden layer may suffice. In general, increasing the number of hidden layers makes the neural network more adaptable and trainable, but if the neural network's task is simple enough, more hidden layers will not improve its accuracy.
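The topology described above (162 input nodes, 108 hidden nodes, 2 output nodes) can be sketched as follows; PyTorch is used purely for illustration, since the description names no library, and the choice of activation function is an assumption.

```python
import torch.nn as nn

N_INPUT = 162                      # one input node per byte of a fixed-length 1553 message packet
N_HIDDEN = (N_INPUT * 2) // 3      # two-thirds of the input width = 108 hidden nodes
N_OUTPUT = 2                       # anomalous vs. not anomalous

# A minimal feed-forward sensor matching the layer sizes described above.
sensor = nn.Sequential(
    nn.Linear(N_INPUT, N_HIDDEN),
    nn.Sigmoid(),                  # activation choice is an assumption, not specified in the description
    nn.Linear(N_HIDDEN, N_OUTPUT),
)
```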
[0082] In some embodiments, there are two sets of training data: positive and negative. The positive training dataset includes messages observed during normal (e.g., uncompromised) avionics platform behavior. The neural network must identify these messages as not being anomalous. The negative training dataset, on the other hand, includes messages that represent invalid, compromised, or otherwise inconsistent states. The neural network must identify these messages as being anomalous. Negative training data can be obtained, for example, by deliberately corrupting positive training data, by collecting messages while artificially forcing a bad state (for example, having a sensor indicate that an aircraft has landed when, in fact, the aircraft is flying), or by supplying data collected during real cyber attacks on a real platform, or the like. The positive and negative datasets can each be divided into two subsets, with, for example, two-thirds of the messages being used for training and the other third being used for validation. Since the positive training dataset can be relatively large (it may include all bus messages perceived during normal operations) while the negative training dataset can be relatively small (e.g., it requires special processing or circumstances to generate), the negative training dataset can have its messages replicated such that its size is comparable to that of the positive training dataset.
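The data preparation just described can be sketched as below: two-thirds of each labeled set for training, one-third for validation, with the smaller negative set replicated to roughly match the positive set's size. The input format is an assumption.

```python
import random

def split_two_thirds(messages):
    """Shuffle and split a list of messages into (training, validation) at a 2/3 : 1/3 ratio."""
    shuffled = random.sample(messages, len(messages))
    cut = (2 * len(shuffled)) // 3
    return shuffled[:cut], shuffled[cut:]

def balance_negative(negative_train, positive_train):
    """Replicate the negative training set so its size is comparable to the positive set."""
    if not negative_train:
        return []
    factor = max(1, len(positive_train) // len(negative_train))
    return negative_train * factor
```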
[0083] In one or more embodiments, training and validation of the neural network includes a training session of about 100 epochs using stochastic gradient descent. Stochastic gradient descent randomly selects, for example, batches of 30 messages from the training datasets. A cost function can then be computed to minimize errors (e.g., outputs opposite to what was expected) for each batch and collectively for the entire training sets. For a simple network, the system can saturate at about 30 epochs (for example, accuracy reaches its best level, such as about 95 - 100% correct) during the training session. A validation session then uses the validation datasets to predict the outcome (using the trained neural network) against predefined labels (e.g., positive or negative) assigned to each of the validation messages.
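A compact training sketch matching this regime (stochastic gradient descent, mini-batches of 30 messages, roughly 100 epochs, and the 162-108-2 topology sketched earlier) is shown below. PyTorch, the learning rate, and the placeholder data are assumptions for illustration.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

X = torch.rand(3000, 162)                 # placeholder message bytes, scaled to 0..1
y = torch.randint(0, 2, (3000,))          # placeholder labels: 1 = anomalous, 0 = normal
loader = DataLoader(TensorDataset(X, y), batch_size=30, shuffle=True)   # batches of 30 messages

model = nn.Sequential(nn.Linear(162, 108), nn.Sigmoid(), nn.Linear(108, 2))
optimizer = optim.SGD(model.parameters(), lr=0.01)   # stochastic gradient descent
loss_fn = nn.CrossEntropyLoss()                      # cost function over the two output nodes

for epoch in range(100):                             # about 100 epochs
    for batch_x, batch_y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)
        loss.backward()                              # backpropagation step
        optimizer.step()
```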
[0084] For an example embedded bus environment, there may be some number (e.g., 12) of different message types that make up the bus traffic. However, a given anomaly may be present in only one of these message types. Accordingly, in one or more embodiments, inputs from all 12 message types are provided in the training and validation datasets for the neural network being trained to identify the anomaly. This makes the neural network more general, as it can handle all of the different message types even when looking for a specific anomaly in one of them.
[0085] Figure 8 is a schematic diagram illustrating a general POMDP 810 and a POMDP 820 according to an embodiment of the present description. As noted earlier, POMDP stands for partially observable Markov decision process. A POMDP is partially observable in that the sensors reveal some information about what the current state or states are, but there is no certainty in this determination. Furthermore, a POMDP is Markov in that the model satisfies the Markov property: the state of the system at time k depends only on the state at time k-1 and the observations at time k. Additionally, a POMDP is a decision process in that it uses the estimate of the system's condition to make a decision about what action to take. A POMDP follows a state model in which the system condition is modeled by a set of states. However, the system is not "in a state" in the sense of occupying exactly one state at any given time. Instead, the "state" of the system is a probability distribution across the possible states. A POMDP can be characterized by three parameters, namely, a transition matrix P, an observation matrix H, and a cost matrix G.
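The description characterizes the POMDP by P, H, and G but does not write out the state-estimate update. A standard Bayesian belief update consistent with that characterization (an assumed formulation, not quoted from the description), where b_k is the belief distribution over states at time k, z_k is the observation at time k, and u_{k-1} is the previous action, is:

```latex
% Standard POMDP belief update (assumed formulation), using the P and H matrices named above.
b_k(j) \;=\;
\frac{H(z_k \mid j)\,\sum_{i} P(j \mid i,\, u_{k-1})\, b_{k-1}(i)}
     {\sum_{j'} H(z_k \mid j')\,\sum_{i} P(j' \mid i,\, u_{k-1})\, b_{k-1}(i)}
```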
[0086] Figure 9 is a schematic diagram illustrating an example POMDP-based decision-making circuit 950 for deciding whether anomalous bus traffic data is normal or abnormal, according to an embodiment of the present description. Figure 10 is a diagram illustrating an example Bayesian recursive estimator for use with a POMDP according to an embodiment of the present description. Decision-making circuit 950 includes a POMDP controller 960 for deciding whether the output of a neural network sensor 900 (trained to identify anomalous bus traffic) comprises normal bus traffic data or abnormal bus traffic data. The POMDP controller includes a Recursive Estimator and a Response Selector. In further detail, at each stage the Recursive Estimator takes an observation zk, the last corrected state estimate Bk-1, and the last action taken, uk-1. This produces a new corrected state estimate Bk, which is used by the Response Selector to produce a new action uk.
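For illustration only, a minimal sketch of such a recursive estimator follows, using a two-state case (normal vs. abnormal, consistent with the configuration described in the next paragraph) and a single action so that the transition model is fixed. The matrix values are invented for the example and are not the patented parameters.

```python
import numpy as np

P = np.array([[0.99, 0.01],     # P[i, j] = probability of moving from state i to state j
              [0.05, 0.95]])    # states: 0 = normal, 1 = abnormal
H = np.array([[0.90, 0.10],     # H[j, z] = probability of observation z while in state j
              [0.30, 0.70]])    # observations: 0 = "not anomalous", 1 = "anomalous"

def update_belief(belief, observation):
    """One step of the Bayesian recursive estimator: predict, weight by likelihood, normalize."""
    predicted = belief @ P                        # propagate the belief through the transition model
    corrected = predicted * H[:, observation]     # weight by the observation likelihood
    return corrected / corrected.sum()            # normalize back to a probability distribution

belief = np.array([0.99, 0.01])                   # start nearly certain the bus is normal
for z in [1, 1, 1]:                               # a run of "anomalous" sensor classifications
    belief = update_belief(belief, z)
print("probability of abnormal state:", round(float(belief[1]), 3))
```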
[0087] For the POMDP-based decision-making circuit 950 of Figure 9, the neural network sensor 900 is assumed to classify the bus traffic messages as anomalous or not, while the POMDP assumes one of two states: normal or abnormal. Furthermore, different actions are not used in the POMDP; instead there is a single "watch" action. Additional Example Embodiments
[0088] The following examples refer to additional embodiments, from which numerous permutations and configurations will be apparent.
[0089] Example 1 is a cyber warning receiver (CWR). The CWR includes a bus sensing circuit to sense traffic on a communications bus over time, an anomaly detection circuit to detect anomalous behavior in the perceived bus traffic, a data fusion circuit to fuse the detected anomalous behavior into groups that have similar characteristics, a decision-making circuit to decide whether the fused anomalous behavior is normal or abnormal, and a behavior logging circuit to record the detected anomalous behavior in an electronic storage device.
[0090] Example 2 includes the subject matter of Example 1. The CWR additionally includes a behavior warning circuit to alert an operator to fused anomalous behavior identified as abnormal.
[0091] Example 3 includes the subject matter of Example 2. Additionally, the behavior warning circuit is configured to not alert the operator to fused anomalous behavior identified as normal.
[0092] Example 4 includes the subject matter of Example 1. Additionally, the communications bus is an embedded communications bus.
[0093] Example 5 includes the subject matter of Example 4. Additionally, the embedded communications bus is a MIL-STD-1553 bus.
[0094] Example 6 includes the subject matter of Example 5. Additionally, the CWR is a standalone device configured to connect to the MIL-STD-1553 bus as a bus monitor.
[0095] Example 7 includes the subject matter of Example 1. Additionally, the anomaly detection circuit includes a plurality of anomaly detection circuits configured to detect a corresponding plurality of different anomalous behaviors in the perceived bus traffic.
[0096] Example 8 includes the subject matter of Example 1. Additionally, the anomaly detection circuit is a neural network trained to detect anomalous behavior in the perceived bus traffic.
[0097] Example 9 includes the subject matter of Example 1. Additionally, the data fusion circuit is configured to fuse the detected anomalous behavior into groups that have similar perception times in their corresponding perceived bus traffic.
[0098] Example 10 includes the subject matter of Example 1. Additionally, the decision-making circuit uses a partially observable Markov decision process (POMDP) to decide whether the fused anomalous behavior is normal or abnormal.
[0099] Example 11 is a computer-implemented cyber warning method. The method includes: perceiving, by a processor, traffic on a communications bus over time; detecting, by the processor, anomalous behavior in the perceived bus traffic; fusing, by the processor, the detected anomalous behavior into groups that have similar characteristics; deciding, by the processor, whether the fused anomalous behavior is normal or abnormal; and recording, by the processor, the detected anomalous behavior in an electronic storage device.
[00100] Example 12 includes the subject matter of Example 11. Furthermore, the method additionally includes alerting, by the processor, an operator to fused anomalous behavior identified as abnormal.
[00101] Example 13 includes the subject matter of Example 12. Furthermore, the method additionally includes not alerting, by the processor, the operator to fused anomalous behavior identified as normal.
[00102] Example 14 includes the subject matter of Example 11. Additionally, the fusing includes merging the detected anomalous behavior into groups that have similar perception times in their corresponding perceived bus traffic.
[00103] Example 15 includes the subject matter of Example 11. Additionally, the deciding includes using a partially observable Markov decision process (POMDP) to decide whether the fused anomalous behavior is normal or abnormal.
[00104] Example 16 is a computer program product that includes one or more non-transient machine-readable media encoded with instructions that, when executed by one or more processors, cause a computer-implemented process for cyber warning to be performed. The process includes sensing traffic on a communications bus over time, detecting anomalous behavior in the perceived bus traffic, fusing the detected anomalous behavior into groups that have similar characteristics, deciding whether the fused anomalous behavior is normal or abnormal, and recording the detected anomalous behavior in an electronic storage device.
[00105] Example 17 includes the subject matter of Example 16. Furthermore, the process additionally includes alerting an operator to fused anomalous behavior identified as abnormal.
[00106] Example 18 includes the subject matter of Example 17. Furthermore, the process additionally includes not alerting the operator to fused anomalous behavior identified as normal.
[00107] Example 19 includes the subject matter of Example 16. Additionally, the fusing includes merging the detected anomalous behavior into groups that have similar perception times in their corresponding perceived bus traffic.
[00108] Example 20 includes the subject matter of Example 16. Additionally, the deciding includes using a partially observable Markov decision process (POMDP) to decide whether the fused anomalous behavior is normal or abnormal.
[00109] The terms and expressions used herein are used as terms of description rather than limitation, and there is no intention, in the use of such terms and expressions, to exclude any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Additionally, various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another, as well as to variation and modification, as will be understood by those skilled in the art. The present description should therefore be considered as encompassing such combinations, variations, and modifications. It is intended that the scope of the present description be limited not by this detailed description, but rather by the appended claims. Future filed applications claiming priority to this application may claim the described subject matter in a different manner, and may generally include any set of one or more elements as variously described or otherwise demonstrated herein.
Claims (16)
1. Cyber warning receiver, characterized in that it comprises: a bus sensing circuit for sensing traffic on a vehicle communications bus over time and for passively monitoring traffic on the vehicle communications bus, but not receiving traffic, wherein the vehicle communications bus is an embedded serial bus or an optical bus; an anomaly detection circuit comprising a plurality of anomaly detectors for detecting anomalous behavior, wherein the anomaly detectors employ a first trained neural network to identify and detect anomalous behavior in the perceived bus traffic, and wherein the anomaly detectors employ rules with characteristics of the vehicle communications bus; a data fusion circuit for fusing the detected anomalous behavior of the perceived bus traffic into groups that have similar characteristics that share common temporal or behavioral patterns, wherein the data fusion circuit applies nonparametric learning to produce fused detected anomalous behavior; a decision-making circuit, wherein the decision-making circuit is a second neural network trained using both the anomalous behavior and the fused detected anomalous behavior to decide whether there is a cyber attack, and wherein the decision-making circuit uses a partially observable Markov decision process (POMDP); and a behavior logging circuit for recording the detected anomalous behavior in an electronic storage device and providing real-time cyber attack notification; wherein the cyber warning receiver is a standalone in-line device configured to connect to the vehicle communications bus as a bus monitor.
2. Cyber warning receiver according to claim 1, characterized in that it further comprises a behavior warning circuit to alert an operator to the fused detected anomalous behavior identified as abnormal.
3. Cyber warning receiver according to claim 2, characterized in that the behavior warning circuit is configured to refrain from alerting the operator to the fused detected anomalous behavior identified as normal.
4. Cyber warning receiver according to claim 1, characterized in that the vehicle communications bus is a MIL-STD-1553 bus.
5. Cyber warning receiver according to claim 4, characterized in that the cyber warning receiver is configured to connect to the MIL-STD-1553 bus.
6. Cyber warning receiver according to claim 1, characterized in that the anomaly detectors are configured to detect a corresponding plurality of different anomalous behaviors in the perceived bus traffic.
7. Cyber warning receiver according to claim 1, characterized in that the data fusion circuit is configured to merge the detected anomalous behavior into groups that have similar perception times in their corresponding perceived bus traffic.
8. Cyber warning receiver according to claim 1, characterized in that the cyber warning receiver employs training periods that characterize traffic patterns and model a range of normal system behaviors to establish the groups having similar characteristics.
9. Computer-implemented cyber warning method, characterized in that the method comprises: perceiving, by a bus sensing circuit, traffic on a vehicle communications bus over time and passively monitoring traffic on the vehicle communications bus, but not receiving traffic, wherein the vehicle communications bus is an embedded serial bus or an optical bus; detecting, by an anomaly detection circuit comprising a plurality of anomaly detectors, anomalous behavior in the perceived bus traffic, wherein a first neural network is trained to identify and detect anomalous behavior in the perceived bus traffic, and wherein the anomaly detectors employ rules with characteristics of the vehicle communications bus; fusing, by a data fusion circuit, the detected anomalous behavior of the perceived bus traffic into groups that have similar characteristics that share common temporal or behavioral patterns, wherein the data fusion circuit applies nonparametric learning to produce fused detected anomalous behavior; deciding, by a decision-making circuit, whether the fused detected anomalous behavior is normal or abnormal, wherein the decision-making circuit is a second neural network independent of the first neural network and is trained using both the anomalous behavior and the fused detected anomalous behavior to decide whether there is a cyber attack, and wherein the decision-making circuit uses a partially observable Markov decision process (POMDP); and recording, by a behavior recording circuit, the detected anomalous behavior in an electronic storage device and providing real-time cyber attack notification; wherein the cyber warning receiver is a standalone in-line device configured to connect to the vehicle communications bus as a bus monitor.
10. Method according to claim 9, characterized in that it further comprises alerting, by a behavior warning circuit, an operator to the fused detected anomalous behavior identified as abnormal.
11. Method according to claim 10, characterized in that it further comprises refraining from alerting, by the behavior warning circuit, the operator to the fused detected anomalous behavior identified as normal.
12. Method according to claim 11, characterized in that the fusing comprises merging the detected anomalous behavior into groups that have similar perception times in their corresponding perceived bus traffic.
13. Non-transient machine-readable media, characterized in that they include, encoded thereon, instructions that, when executed by one or more processors of a cyber warning receiver, cause a computer-implemented process for cyber warning to be performed, the process comprising: perceiving, by a bus sensing circuit, traffic on a vehicle communications bus over time and passively monitoring traffic on the vehicle communications bus, but not receiving traffic, wherein the vehicle communications bus is an embedded serial bus or an optical bus; detecting, by an anomaly detection circuit comprising a plurality of anomaly detectors, anomalous behavior in the perceived bus traffic, wherein a first neural network is trained to identify and detect anomalous behavior in the perceived bus traffic, and wherein the anomaly detectors employ rules with characteristics of the vehicle communications bus; fusing, by a data fusion circuit, the detected anomalous behavior of the perceived bus traffic into groups that have similar characteristics that share common temporal or behavioral patterns, wherein the data fusion circuit applies nonparametric learning to produce fused detected anomalous behavior; deciding, by a decision-making circuit, whether the fused detected anomalous behavior is normal or abnormal, wherein a second neural network, independent of the first neural network, is trained using both the anomalous behavior and the fused detected anomalous behavior to decide whether there is a cyber attack, and wherein the decision-making circuit uses a partially observable Markov decision process (POMDP); and recording, by a behavior recording circuit, the fused detected anomalous behavior in an electronic storage device and providing real-time cyber attack notification; wherein the cyber warning receiver is a standalone in-line device configured to connect to the vehicle communications bus as a bus monitor.
14. Non-transient machine-readable media according to claim 13, characterized in that the process further comprises alerting an operator to the fused detected anomalous behavior identified as abnormal.
15. Non-transient machine-readable media according to claim 14, characterized in that the process further comprises refraining from alerting the operator to the fused detected anomalous behavior identified as normal.
16. Non-transient machine-readable media according to claim 13, characterized in that the fusing comprises merging the detected anomalous behavior into groups that have similar perception times in their corresponding perceived bus traffic.